refactor: remove eq_outputs from validator receipts (#1529)
Conversation
Validators never generate eq_outputs, so they always produced empty dicts.
This change:
- Makes Receipt.eq_outputs optional (dict[int, str] | None = None)
- Only populates eq_outputs for leader receipts
- Omits eq_outputs from serialization for validator receipts
- Removes eq_outputs={} from all validator receipt constructors
- Handles None eq_outputs when decoding leader receipt for validator execution
All 579 backend unit tests, 60 DB tests, and 112 frontend tests pass.
Fixes DXP-36
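To illustrate the shape of this change, here is a minimal sketch of an optional dict field that is omitted from serialization when `None`. Only `eq_outputs` is taken from the PR; the class body, `vote` field, and `to_dict` method are illustrative stand-ins, not the real `Receipt` implementation.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Receipt:
    # Hypothetical minimal receipt; the real class has many more fields.
    vote: str
    eq_outputs: dict[int, str] | None = None  # None for validator receipts

    def to_dict(self) -> dict:
        data = {"vote": self.vote}
        # Omit eq_outputs entirely for validator receipts (None), but keep an
        # empty dict for leaders so the round-trip stays lossless.
        if self.eq_outputs is not None:
            data["eq_outputs"] = {str(k): v for k, v in self.eq_outputs.items()}
        return data


leader = Receipt(vote="agree", eq_outputs={0: "out"})
validator = Receipt(vote="agree")
print(leader.to_dict())     # {'vote': 'agree', 'eq_outputs': {'0': 'out'}}
print(validator.to_dict())  # {'vote': 'agree'}
```

The key design point is that a leader's empty `{}` and a validator's `None` serialize differently, which is exactly what the review comment below about truthiness checks protects.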
⚠️ Caution: review failed. The pull request is closed.
📝 Walkthrough

Receipt.eq_outputs is made optional and populated only for leader executions; consensus builders and tests are updated accordingly. The frontend gains API-key storage, a fetch interceptor, and UI. Anonymous rate limits are increased with an Alembic migration, and an integration-test guard is added that fails if rate limiting is active.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/node/types.py`:
- Around line 257-261: The deserialization uses a truthiness check that treats
an empty dict as falsy, causing eq_outputs to become None; update the condition
around raw_eq in the eq_outputs assignment (the comprehension that currently
uses "if (raw_eq := input.get('eq_outputs'))") to explicitly check for None
(e.g., "if (raw_eq := input.get('eq_outputs')) is not None") so empty dicts
deserialize back to {} while still mapping keys via the existing comprehension
that converts keys to int.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7e051cb8-5e10-42a3-9cbd-0a70eaa2bb92
📒 Files selected for processing (7)
- backend/consensus/base.py
- backend/node/base.py
- backend/node/types.py
- tests/unit/consensus/test_helpers.py
- tests/unit/consensus/test_validator_exec_timeout.py
- tests/unit/test_leader_llm_recovery.py
- tests/unit/test_set_vote.py
💤 Files with no reviewable changes (2)
- backend/consensus/base.py
- tests/unit/consensus/test_validator_exec_timeout.py
Empty dict {} is falsy in Python, so the truthiness check would
incorrectly convert a leader's empty eq_outputs to None on
deserialization. Use `is not None` to preserve the round-trip.
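The pitfall is easy to reproduce in isolation. This standalone sketch contrasts the two deserialization conditions the review discusses (the `raw` payload here is a made-up example, not the real wire format):

```python
raw = {"eq_outputs": {}}  # a leader receipt that legitimately has an empty dict

# Truthiness check: an empty dict is falsy, so this branch discards the {}.
eq_buggy = (
    {int(k): v for k, v in raw_eq.items()}
    if (raw_eq := raw.get("eq_outputs"))
    else None
)

# Explicit None check: {} survives the round-trip; a missing key still maps to None.
eq_fixed = (
    {int(k): v for k, v in raw_eq.items()}
    if (raw_eq := raw.get("eq_outputs")) is not None
    else None
)

print(eq_buggy)  # None (wrong: leader's empty dict was lost)
print(eq_fixed)  # {} (correct)
```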
…t guard

- Double free tier rate limits via Alembic migration (60/1000/10000)
- Raise anonymous limits to 30/500/5000 in defaults
- Add API key settings section in frontend settings page
- Install global fetch interceptor to send API key on all RPC requests
- Add integration test conftest that fails fast if rate limiting is on
Actionable comments posted: 3
🧹 Nitpick comments (3)
tests/integration/conftest.py (1)

8-8: Add an explicit return type to the fixture function. Use `-> None` on `ensure_rate_limiting_disabled` to satisfy the repository-wide typing rule.

Suggested patch:

-def ensure_rate_limiting_disabled():
+def ensure_rate_limiting_disabled() -> None:

As per coding guidelines (`**/*.py`): include type hints in all Python code.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/integration/conftest.py` at line 8, add an explicit return type annotation to the fixture function `ensure_rate_limiting_disabled` by changing its signature to `def ensure_rate_limiting_disabled() -> None:` so it satisfies the repository typing rule and linter checks.

frontend/src/utils/apiKey.ts (2)
3-3: Share the storage-key constant instead of duplicating it. `API_KEY_STORAGE_KEY` is duplicated here and in `frontend/src/components/Simulator/settings/ApiKeySection.vue` (line 7). Export one source of truth to prevent drift.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@frontend/src/utils/apiKey.ts` at line 3, export the existing `API_KEY_STORAGE_KEY` constant from `utils/apiKey.ts` and use that single exported symbol instead of redefining it in the component; update `ApiKeySection.vue` to import `API_KEY_STORAGE_KEY` from `utils/apiKey.ts` and remove the duplicated const so there is one source of truth for the storage key.
32-33: Avoid exact string equality for RPC URL matching. Line 32 (`url === rpcUrl`) is brittle; equivalent URLs with trailing-slash differences can skip API-key injection. Normalizing both sides before comparing makes this resilient.

Proposed refactor:

+  const normalizeUrl = (value: string): string => {
+    const u = new URL(value);
+    return `${u.origin}${u.pathname}`.replace(/\/+$/, '');
+  };
+  const normalizedRpcUrl = normalizeUrl(rpcUrl);
+
   globalThis.fetch = function (
     input: RequestInfo | URL,
     init?: RequestInit,
   ): Promise<Response> {
     const url = input instanceof Request ? input.url : input.toString();
-    if (url === rpcUrl) {
+    if (normalizeUrl(url) === normalizedRpcUrl) {

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@frontend/src/utils/apiKey.ts` around lines 32-33, the equality check using `url === rpcUrl` is brittle and can miss matches due to trailing slashes or minor canonical differences; normalize both values before comparing (e.g., trim trailing slashes and optionally lowercase the host) so equivalent URLs match reliably, then inject the API key when the normalized values are equal.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/src/components/Simulator/settings/ApiKeySection.vue`:
- Around line 16-18: The saveApiKey routine trims the string for storage but
doesn't update the reactive apiKey state, causing a UI/storage mismatch (e.g.,
v-if="apiKey" still sees the untrimmed value); before calling
localStorage.setItem in function saveApiKey (and any other save handlers),
assign apiKey.value = apiKey.value.trim() and then persist API_KEY_STORAGE_KEY
using that trimmed apiKey.value, then call notify — this ensures the reactive
state and stored value are normalized and the UI (v-if="apiKey") reflects the
true saved value.
In `@frontend/src/utils/apiKey.ts`:
- Around line 35-40: When input is a Request, the current code builds headers
only from init?.headers and can drop headers present on the Request (e.g.,
Content-Type); update the logic that creates Headers (the Headers(...) call and
subsequent headers.set('X-API-Key', apiKey)) to merge Request headers and
init.headers: if input is a Request, start with new Headers(input.headers), then
iterate over init?.headers to copy/override into that Headers object, then only
set 'X-API-Key' when not already present; keep using the same headers variable
and assign back to init = { ...init, headers } so existing Request headers are
preserved.
In `@tests/integration/conftest.py`:
- Around line 18-31: The probe request using requests.post in conftest.py
currently has no timeout and can hang; update the call in the probe logic to
pass an explicit timeout (e.g. timeout=5) and wrap the request in a try/except
that catches requests.exceptions.RequestException (including ConnectionError and
Timeout), calling pytest.exit with a clear message and nonzero returncode when a
connection error/timeout occurs; preserve the existing resp.status_code == 429
handling for rate-limiting when the request succeeds.
---
Nitpick comments:
In `@frontend/src/utils/apiKey.ts`:
- Line 3: Export the existing API_KEY_STORAGE_KEY constant from utils/apiKey.ts
and use that single exported symbol instead of redefining it in the component;
update ApiKeySection.vue to import API_KEY_STORAGE_KEY from utils/apiKey.ts and
remove the duplicated const in the component so there is one source of truth for
the storage key.
- Around line 32-33: The equality check using `url === rpcUrl` is brittle and
can miss matches due to trailing slashes or minor canonical differences; update
the comparison in the function that uses `getApiKey()` and `rpcUrl` to normalize
both values before comparing (e.g., trim trailing slashes and optionally
lowercase/normalize host) so equivalent URLs match reliably, then proceed to
inject the API key when the normalized values are equal.
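The normalization idea above can be sketched with the Python standard library (the actual fix belongs in the TypeScript interceptor; this just demonstrates the comparison, and the URLs are made up):

```python
from urllib.parse import urlsplit


def normalize_url(value: str) -> str:
    # Keep scheme + host (lowercased) + path with trailing slashes stripped,
    # so cosmetic variants of the same endpoint compare equal.
    parts = urlsplit(value)
    path = parts.path.rstrip("/")
    return f"{parts.scheme.lower()}://{parts.netloc.lower()}{path}"


rpc_url = "http://localhost:4000/api"
print("http://LOCALHOST:4000/api/" == rpc_url)                            # False: exact match misses it
print(normalize_url("http://LOCALHOST:4000/api/") == normalize_url(rpc_url))  # True
```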
In `@tests/integration/conftest.py`:
- Line 8: Add an explicit return type annotation to the fixture function
ensure_rate_limiting_disabled by changing its signature to include -> None;
update the function definition for ensure_rate_limiting_disabled to declare it
returns None (e.g., def ensure_rate_limiting_disabled() -> None:) so it
satisfies the repository typing rule and linter checks.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 71c0db93-d13f-464f-be05-352ff57d8763
📒 Files selected for processing (8)
- .env.example
- backend/database_handler/migration/versions/c3d7f2a8b104_double_free_tier_rate_limits.py
- docker-compose.yml
- frontend/src/components/Simulator/settings/ApiKeySection.vue
- frontend/src/main.ts
- frontend/src/utils/apiKey.ts
- frontend/src/views/Simulator/SettingsView.vue
- tests/integration/conftest.py
function saveApiKey() {
  localStorage.setItem(API_KEY_STORAGE_KEY, apiKey.value.trim());
  notify({
Normalize apiKey state before saving to avoid UI/storage mismatch.
On Line 17 the trimmed value is persisted, but apiKey.value itself remains untrimmed. Combined with Line 71 (v-if="apiKey"), the UI can indicate a key exists when persisted value is empty.
Proposed fix
function saveApiKey() {
- localStorage.setItem(API_KEY_STORAGE_KEY, apiKey.value.trim());
+ apiKey.value = apiKey.value.trim();
+ if (apiKey.value) {
+ localStorage.setItem(API_KEY_STORAGE_KEY, apiKey.value);
+ } else {
+ localStorage.removeItem(API_KEY_STORAGE_KEY);
+ }
notify({
title: 'API key saved',
type: 'success',
});
}
Also applies to: 71-71
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/Simulator/settings/ApiKeySection.vue` around lines 16
- 18, The saveApiKey routine trims the string for storage but doesn't update the
reactive apiKey state, causing a UI/storage mismatch (e.g., v-if="apiKey" still
sees the untrimmed value); before calling localStorage.setItem in function
saveApiKey (and any other save handlers), assign apiKey.value =
apiKey.value.trim() and then persist API_KEY_STORAGE_KEY using that trimmed
apiKey.value, then call notify — this ensures the reactive state and stored
value are normalized and the UI (v-if="apiKey") reflects the true saved value.
    const headers = new Headers(init?.headers);
    if (!headers.has('X-API-Key')) {
      headers.set('X-API-Key', apiKey);
    }
    init = { ...init, headers };
  }
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify whether Request objects are used with fetch in frontend code paths.
rg -nP --type=ts --type=tsx --type=vue -C2 '\bnew\s+Request\s*\('
rg -nP --type=ts --type=tsx --type=vue -C2 '\bfetch\s*\(\s*new\s+Request\s*\('
Repository: genlayerlabs/genlayer-studio
Length of output: 127
🏁 Script executed:
#!/bin/bash
# First, let's check if the file exists and read it
echo "=== Checking apiKey.ts file ==="
cat -n frontend/src/utils/apiKey.ts | head -50
echo ""
echo "=== Searching for Request object instantiation ==="
rg -n 'new\s+Request' --type=ts -C2
echo ""
echo "=== Searching for fetch calls with Request ==="
rg -n 'fetch\s*\(' --type=ts -C3 | grep -A3 -B3 "Request"
echo ""
echo "=== Checking for fetch usage patterns in frontend ==="
fd -e ts -e js frontend/src | xargs rg -l 'fetch' | head -10
Repository: genlayerlabs/genlayer-studio
Length of output: 2632
🏁 Script executed:
#!/bin/bash
# Let's check how the SDK is being used and if Request objects could be passed
echo "=== Searching for installApiKeyFetchInterceptor usage ==="
rg -n 'installApiKeyFetchInterceptor' --type=ts
echo ""
echo "=== Searching for genlayer-js SDK usage ==="
rg -n 'from.*genlayer' --type=ts
echo ""
echo "=== Checking package.json for genlayer-js ==="
cat package.json | grep -A2 -B2 'genlayer'
Repository: genlayerlabs/genlayer-studio
Length of output: 2817
🏁 Script executed:
#!/bin/bash
# Let's examine how the genlayer-js SDK might be making fetch calls
echo "=== Looking for GenLayerClient usage ==="
rg -n 'createClient' --type=ts -A 5 | head -30
echo ""
echo "=== Checking useGenlayer hook ==="
cat -n frontend/src/hooks/useGenlayer.ts | head -40
Repository: genlayerlabs/genlayer-studio
Length of output: 2420
🏁 Script executed:
#!/bin/bash
# Let's verify the actual Fetch API behavior and check if there's any documentation
echo "=== Looking at how fetch is being called in the interceptor context ==="
cat -n frontend/src/utils/apiKey.ts
echo ""
echo "=== Checking if there are any direct fetch calls in the codebase ==="
rg -n 'fetch\s*\(' --type=ts frontend/src | head -20
Repository: genlayerlabs/genlayer-studio
Length of output: 1924
🏁 Script executed:
#!/bin/bash
# Let's check the rpc.ts file to see how fetch is being used
echo "=== Checking rpc.ts fetch usage ==="
cat -n frontend/src/clients/rpc.ts | sed -n '35,50p'
echo ""
echo "=== Verify the Fetch API spec behavior documentation ==="
cat -n frontend/src/utils/apiKey.ts | sed -n '26,43p'
Repository: genlayerlabs/genlayer-studio
Length of output: 1277
Preserve existing request headers when input is a Request.
On Line 35, headers are built from init?.headers only. If a caller passes a Request object with existing headers, Line 39 can overwrite them, which may drop required headers (e.g., Content-Type). Since the genlayer-js SDK makes its own fetch calls, this interceptor must properly merge headers from both sources.
Proposed fix
- const headers = new Headers(init?.headers);
+ const headers = new Headers();
+ if (input instanceof Request) {
+ input.headers.forEach((value, key) => headers.set(key, value));
+ }
+ if (init?.headers) {
+ new Headers(init.headers).forEach((value, key) =>
+ headers.set(key, value),
+ );
+ }
if (!headers.has('X-API-Key')) {
headers.set('X-API-Key', apiKey);
}
init = { ...init, headers };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/utils/apiKey.ts` around lines 35 - 40, When input is a Request,
the current code builds headers only from init?.headers and can drop headers
present on the Request (e.g., Content-Type); update the logic that creates
Headers (the Headers(...) call and subsequent headers.set('X-API-Key', apiKey))
to merge Request headers and init.headers: if input is a Request, start with new
Headers(input.headers), then iterate over init?.headers to copy/override into
that Headers object, then only set 'X-API-Key' when not already present; keep
using the same headers variable and assign back to init = { ...init, headers }
so existing Request headers are preserved.
resp = requests.post(
    url,
    data=json.dumps(
        {"jsonrpc": "2.0", "method": "eth_chainId", "params": [], "id": 1}
    ),
    headers={"Content-Type": "application/json"},
)
if resp.status_code == 429:
    pytest.exit(
        "Rate limiting is enabled on the backend. "
        "Set RATE_LIMIT_ENABLED=false in .env and restart containers "
        "before running integration tests.",
        returncode=1,
    )
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify requests.* calls in this file include an explicit timeout keyword.
python - <<'PY'
import ast
from pathlib import Path
path = Path("tests/integration/conftest.py")
tree = ast.parse(path.read_text())
for node in ast.walk(tree):
if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
if isinstance(node.func.value, ast.Name) and node.func.value.id == "requests":
has_timeout = any(k.arg == "timeout" for k in node.keywords if isinstance(k, ast.keyword))
print(f"{path}:{node.lineno} requests.{node.func.attr} timeout={has_timeout}")
PY
Repository: genlayerlabs/genlayer-studio
Length of output: 132
Add timeout and connection-error handling to the probe request.
The requests.post call on line 18 lacks a timeout parameter and will hang the entire test session if the backend is unreachable. Add explicit timeout and exception handling:
Suggested patch
for _ in range(15):
- resp = requests.post(
- url,
- data=json.dumps(
- {"jsonrpc": "2.0", "method": "eth_chainId", "params": [], "id": 1}
- ),
- headers={"Content-Type": "application/json"},
- )
+ try:
+ resp = requests.post(
+ url,
+ json={"jsonrpc": "2.0", "method": "eth_chainId", "params": [], "id": 1},
+ timeout=(2, 5),
+ )
+ except requests.RequestException as exc:
+ pytest.exit(
+ f"Cannot reach backend at {url}: {exc}. "
+ "Ensure containers are running before integration tests.",
+ returncode=1,
+ )
if resp.status_code == 429:
pytest.exit(
"Rate limiting is enabled on the backend. "
"Set RATE_LIMIT_ENABLED=false in .env and restart containers "
"before running integration tests.",
returncode=1,
)
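The guard's control flow can be exercised without a network by injecting the transport. The `post` callable, return values, and message strings below are stand-ins for the real conftest code, which calls `requests.post` and `pytest.exit` directly:

```python
def probe_rate_limiting(post, url: str) -> str:
    # post(url, timeout) returns an HTTP status code, or raises ConnectionError.
    try:
        status = post(url, timeout=5)
    except ConnectionError as exc:
        return f"exit: cannot reach backend at {url}: {exc}"
    if status == 429:
        return "exit: rate limiting is enabled on the backend"
    return "ok"


print(probe_rate_limiting(lambda url, timeout: 200, "http://localhost:4000/api"))  # ok
print(probe_rate_limiting(lambda url, timeout: 429, "http://localhost:4000/api"))
```

Without the explicit `timeout` and the `except` branch, an unreachable backend hangs the whole session instead of failing fast, which is exactly the Ruff S113 finding above.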
🧰 Tools
🪛 Ruff (0.15.6)
[error] 18-18: Probable use of requests call without timeout
(S113)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/integration/conftest.py` around lines 18 - 31, The probe request using
requests.post in conftest.py currently has no timeout and can hang; update the
call in the probe logic to pass an explicit timeout (e.g. timeout=5) and wrap
the request in a try/except that catches requests.exceptions.RequestException
(including ConnectionError and Timeout), calling pytest.exit with a clear
message and nonzero returncode when a connection error/timeout occurs; preserve
the existing resp.status_code == 429 handling for rate-limiting when the request
succeeds.
});

function saveApiKey() {
  localStorage.setItem(API_KEY_STORAGE_KEY, apiKey.value.trim());
Check failure

Code scanning / CodeQL: Clear text storage of sensitive information (High)

Copilot Autofix (AI, 2 days ago)
In general, to fix this kind of problem you should avoid storing secrets directly in browser storage; instead, store either (a) nothing, requiring re-entry, (b) a short-lived opaque token from a backend, or (c) an encrypted form of the secret, where the encryption key is not stored alongside the ciphertext (e.g., derived from a user-provided passphrase).

For this specific component, the minimal invasive fix is:

- Introduce simple symmetric encryption and decryption using the Web Crypto API (available in the browser), without adding backend dependencies.
- Derive a per-user encryption key from a passphrase (which the user must enter each time they want to unlock/use the stored API key) using PBKDF2.
- When saving the API key:
  - Prompt (via a simple extra field) for a passphrase.
  - Derive a key from the passphrase, generate a random salt and IV, encrypt the API key, and store only the ciphertext bundle (salt + IV + ciphertext, base64) in localStorage under the same API_KEY_STORAGE_KEY.
- When loading:
  - Read the encrypted blob from localStorage and keep it as an encrypted value in state.
  - Only decrypt it when the user supplies the correct passphrase (e.g., via an "Unlock" button), at which point you derive the key again and decrypt to fill apiKey.value. This way, the cleartext key is not persisted across reloads and exists only in memory.

To keep the change constrained to the given file:

- Replace the direct localStorage.getItem/setItem use for the cleartext API key with encryption/decryption helpers that operate on strings and use window.crypto.subtle.
- Add new reactive refs for encryptedApiKey, passphrase, and maybe hasStoredKey.
- Adjust the template to include a passphrase input and separate "Save encrypted" / "Unlock stored key" buttons.
- The key technical pieces to add:
  - An async deriveKey(passphrase: string, salt: Uint8Array): Promise<CryptoKey>
  - An async encryptApiKey(plainKey: string, passphrase: string): Promise<string>
  - An async decryptApiKey(encryptedBundle: string, passphrase: string): Promise<string>
- Keep existing notifications and masking behavior unchanged, extending them to handle the new flows.
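The PBKDF2 derivation step works the same in any runtime; here is a Python sketch using only the standard library (the browser fix would use `crypto.subtle.deriveKey`; the iteration count and passphrases are illustrative):

```python
import hashlib
import os


def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # Same passphrase + same salt -> same 32-byte key. A fresh random salt
    # per save means identical passphrases still yield different keys.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)


salt = os.urandom(16)
k1 = derive_key("correct horse", salt)
k2 = derive_key("correct horse", salt)
print(k1 == k2)  # True: derivation is deterministic for a fixed salt
print(k1 == derive_key("correct horse", os.urandom(16)))  # different salt, different key
```

Because only the salt and ciphertext are persisted, the stored blob is useless without the passphrase, which is the property the CodeQL finding is asking for.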
🎉 This PR is included in version 0.111.0 🎉 The release is available on GitHub. Your semantic-release bot 📦🚀
What

Made `Receipt.eq_outputs` optional since validators never generate it. Changes:

- `eq_outputs` is optional (`dict[int, str] | None = None`) in `Receipt`
- Only populate `eq_outputs` for leader receipts; validators get `None`
- Omit `eq_outputs` from JSON serialization for validator receipts
- Removed `eq_outputs={}` from all validator receipt constructors in tests

Why

Validators always produced empty `eq_outputs` dicts with no actual data. Making it optional reduces unnecessary storage and gives the data model clearer semantics.

Testing done

All 579 backend unit tests, 60 DB tests, and 112 frontend tests pass.

Decisions made

- Leader receipts always set `eq_outputs` to a dict, even if empty
- Validator receipts use `eq_outputs = None` (omitted from JSON)
- Handle `None` when decoding a leader receipt for validator execution

Fixes DXP-36
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores